Search for: All records

Creators/Authors contains: "Bradley, Elizabeth"

Note: When you click a Digital Object Identifier (DOI) link, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites, whose policies may differ from this site's.

  1. Scaling regions—intervals on a graph where the dependent variable depends linearly on the independent variable—abound in dynamical systems, notably in calculations of invariants like the correlation dimension or a Lyapunov exponent. In these applications, scaling regions are generally selected by hand, a process that is subjective and often challenging due to noise, algorithmic effects, and confirmation bias. In this paper, we propose an automated technique for extracting and characterizing such regions. Starting with a two-dimensional plot—e.g., the values of the correlation integral, calculated using the Grassberger–Procaccia algorithm over a range of scales—we create an ensemble of intervals by considering all possible combinations of end points, generating a distribution of slopes from least squares fits weighted by the length of the fitting line and the inverse square of the fit error. The mode of this distribution gives an estimate of the slope of the scaling region (if it exists). The end points of the intervals that correspond to the mode provide an estimate for the extent of that region. When there is no scaling region, the distributions will be wide and the resulting error estimates for the slope will be large. We demonstrate this method for computations of dimension and Lyapunov exponent for several dynamical systems and show that it can be useful in selecting values for the parameters in time-delay reconstructions. 
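    As an illustration of the ensemble-of-fits idea described in this abstract, here is a minimal Python sketch; the minimum interval length, histogram binning, and exact weighting formula are assumptions made for the sake of a runnable example, not the authors' implementation.

```python
import numpy as np

def scaling_region_slope(x, y, min_len=5, bins=200):
    """Estimate the slope of a scaling region in (x, y) data by fitting a
    line to every sufficiently long sub-interval, weighting each fit by
    interval length and inverse-squared fit error, and taking the mode of
    the resulting slope distribution."""
    slopes, weights, spans = [], [], []
    n = len(x)
    for i in range(n - min_len):
        for j in range(i + min_len, n):
            xi, yi = x[i:j + 1], y[i:j + 1]
            (m, b), res, *_ = np.polyfit(xi, yi, 1, full=True)
            rmse = np.sqrt(res[0] / len(xi)) if res.size else 0.0
            slopes.append(m)
            # long intervals and low-error fits count more
            weights.append((x[j] - x[i]) / max(rmse, 1e-12) ** 2)
            spans.append((i, j))
    slopes, weights = np.asarray(slopes), np.asarray(weights)
    hist, edges = np.histogram(slopes, bins=bins, weights=weights)
    k = np.argmax(hist)
    slope_estimate = 0.5 * (edges[k] + edges[k + 1])
    # the intervals whose slopes fall in the modal bin hint at the region's extent
    in_mode = (slopes >= edges[k]) & (slopes < edges[k + 1])
    return slope_estimate, [spans[q] for q in np.flatnonzero(in_mode)]
```

    Applied to log-log correlation-integral data (log ε versus log C(ε)), the returned slope plays the role of the correlation-dimension estimate, and the end points of the modal intervals indicate the extent of the scaling region.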
  3. Current operational forecasts of solar eruptions are made by human experts using a combination of qualitative shape-based classification systems and historical data about flaring frequencies. In the past decade, there has been a great deal of interest in crafting machine-learning (ML) flare-prediction methods to extract underlying patterns from a training set – e.g. a set of solar magnetogram images, each characterized by features derived from the magnetic field and labeled as to whether it was an eruption precursor. These patterns, captured by various methods (neural nets, support vector machines, etc.), can then be used to classify new images. A major challenge with any ML method is the featurization of the data: pre-processing the raw images to extract higher-level properties, such as characteristics of the magnetic field, that can streamline the training and use of these methods. It is key to choose features that are informative from the standpoint of the task at hand. To date, the majority of ML-based solar eruption prediction methods have used physics-based magnetic and electric field features such as the total unsigned magnetic flux, the gradients of the fields, the vertical current density, etc. In this paper, we extend the relevant feature set to include characteristics of the magnetic field that are based purely on the geometry and topology of 2D magnetogram images and show that this improves the prediction accuracy of a neural-net-based flare-prediction method.
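    The shape of such a pipeline can be sketched in a deliberately simplified way; the features below (counts and areas of strong-field regions found by thresholding and connected-component labeling) are stand-ins chosen for the example, not the geometric or topological features used in the paper, and the classifier settings are arbitrary.

```python
import numpy as np
from scipy import ndimage
from sklearn.neural_network import MLPClassifier

def geometric_features(magnetogram, threshold=100.0):
    """Toy geometry-based features from a 2D magnetogram array:
    counts and total areas of strong positive/negative polarity regions.
    These are illustrative stand-ins, not the paper's feature set."""
    pos = magnetogram > threshold
    neg = magnetogram < -threshold
    _, n_pos = ndimage.label(pos)   # connected components of strong positive field
    _, n_neg = ndimage.label(neg)   # connected components of strong negative field
    return np.array([n_pos, n_neg, pos.sum(), neg.sum()], dtype=float)

def train_flare_classifier(magnetograms, labels):
    """Featurize a set of magnetograms (hypothetical 2D arrays with 0/1
    flare labels) and train a small neural-net classifier on the features."""
    X = np.stack([geometric_features(m) for m in magnetograms])
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    clf.fit(X, labels)
    return clf
```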
  5. Paleoclimate records are rich sources of information about the history of the Earth system. Information theory provides a new means for studying these records. We demonstrate that weighted permutation entropy of water-isotope data from the West Antarctic Ice Sheet (WAIS) Divide ice core reveals meaningful climate signals in this record. We find that this measure correlates with accumulation (meters of ice equivalent per year) and may record the influence of geothermal heating effects in the deepest parts of the core. Dansgaard-Oeschger and Antarctic Isotope Maxima events, however, do not appear to leave strong signatures in the information record, suggesting that these abrupt warming events may actually be predictable features of the climate’s dynamics. While the potential power of information theory in paleoclimatology is significant, the associated methods require well-dated and high-resolution data. The WAIS Divide core is the first paleoclimate record that can support this kind of analysis. As more high-resolution records become available, information theory could become a powerful forensic tool in paleoclimate science.
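    For readers unfamiliar with the measure, here is a minimal sketch of weighted permutation entropy for a 1D series; the embedding parameters and the variance-based word weighting follow the common formulation of the method and are not the specific choices made in the paper.

```python
import numpy as np
from math import factorial

def weighted_permutation_entropy(x, order=3, delay=1):
    """Weighted permutation entropy of a 1D series: each length-`order`
    delay vector (word) is mapped to its ordinal pattern and weighted by
    its variance; the entropy of the weighted pattern distribution is
    returned, normalized to [0, 1]."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    # embed: rows are the delay vectors (words)
    words = np.stack([x[i * delay : i * delay + n] for i in range(order)], axis=1)
    patterns = np.argsort(words, axis=1)      # ordinal pattern of each word
    weights = words.var(axis=1)               # weight = variance of the word
    # accumulate weight per distinct pattern (mixed-radix key per permutation)
    keys = (patterns * (order ** np.arange(order))).sum(axis=1)
    totals = {}
    for k, w in zip(keys, weights):
        totals[k] = totals.get(k, 0.0) + w
    p = np.array(list(totals.values()))
    p = p[p > 0]
    p = p / p.sum()
    H = -(p * np.log2(p)).sum()
    return H / np.log2(factorial(order))      # normalize to [0, 1]
```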
  6. Permutation entropy techniques can be useful for identifying anomalies in paleoclimate data records, including noise, outliers, and post-processing issues. We demonstrate this using weighted and unweighted permutation entropy on water-isotope records from a deep polar ice core. In one region of these isotope records, our previous calculations (see Garland et al. 2018) revealed an abrupt change in the complexity of the traces: specifically, in the amount of new information that appeared at every time step. We conjectured that this effect was due to noise introduced by an older laboratory instrument. In this paper, we validate that conjecture by reanalyzing a section of the ice core using a more advanced version of the laboratory instrument. The anomalous noise levels are absent from the permutation entropy traces of the new data. In other sections of the core, we show that permutation entropy techniques can be used to identify anomalies in the data that are not associated with climatic or glaciological processes, but rather with effects occurring during field work, laboratory analysis, or data post-processing. These examples make it clear that permutation entropy is a useful forensic tool for identifying sections of data that require targeted reanalysis—and can even be useful for guiding that analysis.
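    To turn such an entropy trace into an anomaly flag of the kind described above, one can scan the record with a sliding window and look for abrupt jumps. The window length, step, and jump threshold below are illustrative assumptions, and `entropy_fn` can be any per-window complexity measure, e.g. the weighted permutation entropy sketched after item 5.

```python
import numpy as np

def entropy_anomaly_scan(series, entropy_fn, window=2048, step=256, jump=0.15):
    """Slide a window along a data series, compute an entropy measure on each
    window, and flag window positions where the measure jumps by more than
    `jump` relative to the previous window -- the kind of abrupt complexity
    change associated here with instrument or processing artifacts."""
    values, starts = [], []
    for start in range(0, len(series) - window + 1, step):
        values.append(entropy_fn(series[start:start + window]))
        starts.append(start)
    values = np.asarray(values)
    flags = np.flatnonzero(np.abs(np.diff(values)) > jump) + 1
    return starts, values, [starts[i] for i in flags]

# usage (hypothetical): scan an isotope record with a chosen entropy measure
# starts, wpe, anomalies = entropy_anomaly_scan(d18O, weighted_permutation_entropy)
```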